New AI institute to focus on the speech-language pathology needs of children

1/11/2023 Vincent Lara-Cinisomo, AHS

The National Science Foundation-funded effort will address the nationwide shortage of speech-language pathologists and ensure at-risk children receive timely, effective assistance


The University of Illinois is part of a nine-university consortium led by the University at Buffalo that has been awarded a $20 million grant by the National Science Foundation to establish a national institute that will develop artificial intelligence systems to identify and assist young children with speech and/or language processing challenges. The award will establish the AI Institute for Exceptional Education to advance foundational AI technologies, human-centered AI design, and learning science that improve educational outcomes for young children.

The University at Buffalo has partnered with GiGi’s Playhouse Down Syndrome Achievement Center of Buffalo to provide UB students with in-classroom experience teaching students with disabilities. UB students were photographed at the center working with clients in July 2021.
Photo Credit: Meredith Forrest Kulwicki

The institute will help address the nationwide shortage of speech-language pathologists and provide services to children ages 3 to 10 who are at increased risk of falling behind in their academic and socio-emotional development – issues exacerbated by the COVID-19 pandemic.

Pamela Hadley, professor and head of the Department of Speech and Hearing Science at the University of Illinois Urbana-Champaign, is one of the co-principal investigators for the grant. 

“In light of the shortage of speech-language pathologists nationwide, there is a pressing need to develop health technologies that can help identify young children at risk for speech and language disorders at younger ages and do so more efficiently,” said Hadley, a fellow of the American Speech-Language-Hearing Association. “Our multidisciplinary team will enhance automatic speech recognition systems, improving early identification and interventions for children with developmental language disorder and other conditions that affect speech and language. Our team will also create advanced artificial intelligence systems that will support tailored interventions for children on the caseloads of speech-language pathologists. By doing so, we will create educational environments that help children thrive socially and academically.”

Institute will help underserved students

The AI Institute for Exceptional Education will focus on serving the millions of children nationwide who, under the Individuals with Disabilities Education Act, require speech and language services.

Professor Heng Ji
Photo Credit: University of Illinois

Heng Ji, an Illinois Computer Science professor and an investigator on the project, lends her expertise to the effort.

“This project aims to apply advanced AI techniques to assist young children with speech and/or language processing challenges,” Ji said. “I will contribute in two ways. The first is semantic graph-based knowledge selection for reading comprehension. Given a document a child is reading, our AI system will construct a semantic knowledge graph from the document and select knowledge points to automatically generate questions and answers that support the child's reading comprehension.

“The second way is through automatic social norm checking for dialogues. A good dialogue follows social norms agreed upon by both parties. Although a social norm may be inferred directly from a single communicative act (e.g., use of honorifics), in many situations it must be inferred holistically by combining multiple multimodal communicative behavior indicators and situational features. We aim to discover sociocultural norms automatically and to alert users when an ongoing conversation violates them.”
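
The article does not specify either system's implementation, but Ji's first contribution can be pictured as a small pipeline: extract facts from the text a child is reading, store them as a graph, and turn selected edges into comprehension questions. The toy Python sketch below is purely illustrative; its hand-written patterns and question templates stand in for the far more capable semantic parsing and generation models the project would actually use, and none of the function names come from the institute's software.

```python
# Purely illustrative sketch of semantic-graph-based question generation.
# The patterns, templates, and function names are invented for this example.
import re
from collections import defaultdict

def extract_triples(text):
    """Naive (subject, relation, object) extraction from simple sentences,
    standing in for a real semantic parser."""
    triples = []
    for sentence in re.split(r"[.!?]", text):
        match = re.match(r"\s*(The \w+|\w+) (eats|lives in|is) (.+)", sentence)
        if match:
            subj, rel, obj = (part.strip() for part in match.groups())
            triples.append((subj, rel, obj))
    return triples

def build_graph(triples):
    """Store triples as an adjacency map: subject -> [(relation, object)]."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

def generate_questions(graph):
    """Turn each edge of the graph into a comprehension question and answer."""
    templates = {"eats": "What does {s} eat?",
                 "lives in": "Where does {s} live?",
                 "is": "What is {s}?"}
    for subj, edges in graph.items():
        for rel, obj in edges:
            yield templates[rel].format(s=subj.lower()), obj

story = "The fox lives in the forest. The fox eats berries."
for question, answer in generate_questions(build_graph(extract_triples(story))):
    print(question, "->", answer)
```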

Specifically, the project will develop two advanced AI solutions: the AI Screener for early identification of potential speech and/or language disorders; and the AI Orchestrator, which will act as a virtual teaching assistant by providing students with ability-based interventions.

The AI Screener will listen to and observe children in the classroom, collecting samples of children’s speech, facial expressions, gestures and other data. It will create weekly summaries of these interactions that catalogue each child’s vocabulary, pronunciation, video snippets and more. These summaries will help teachers monitor their students’ speech and language abilities and, if needed, suggest a formal evaluation with a speech-language pathologist.
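
The article does not describe the AI Screener's internals, but the kind of weekly roll-up it is said to produce can be pictured roughly as follows. In this hypothetical Python sketch, the field names and the vocabulary threshold used to suggest an evaluation are invented for illustration; the real system would draw on speech recognition, video, and clinically validated criteria rather than a simple word count.

```python
# Hypothetical illustration of a weekly AI Screener roll-up; field names and
# the flagging rule are invented, not details of the actual system.
from dataclasses import dataclass, field

@dataclass
class Observation:
    child_id: str
    transcript: str                      # words recognized in a speech sample
    mispronounced: list = field(default_factory=list)

@dataclass
class WeeklySummary:
    child_id: str
    vocabulary: set
    mispronounced: set
    suggest_evaluation: bool             # suggest a formal SLP evaluation?

def summarize_week(observations, min_vocab=25):
    """Group classroom observations by child and build one summary per child."""
    by_child = {}
    for obs in observations:
        summary = by_child.setdefault(
            obs.child_id, WeeklySummary(obs.child_id, set(), set(), False))
        summary.vocabulary.update(obs.transcript.lower().split())
        summary.mispronounced.update(obs.mispronounced)
    for summary in by_child.values():
        # Illustrative threshold only: flag a child whose observed weekly
        # vocabulary stays unusually small.
        summary.suggest_evaluation = len(summary.vocabulary) < min_vocab
    return list(by_child.values())
```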

The AI Orchestrator is an app that will help speech-language pathologists, most of whom have caseloads so large that they must provide group-based interventions for children instead of individualized care. The app addresses this by recommending personalized content tailored to students’ needs. It continues to monitor students’ progress and adjusts lesson plans to ensure that the interventions are working.
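
The adjust-and-monitor loop described for the AI Orchestrator could look roughly like the sketch below. The skill names, scoring scale, and update rule are placeholders chosen for this example, not details of the actual app.

```python
# Rough sketch of an adapt-and-monitor loop; skills, scores, and the update
# rule are placeholders, not details of the AI Orchestrator itself.
def recommend_lesson(mastery, lessons):
    """Pick content targeting the child's weakest skill."""
    weakest_skill = min(mastery, key=mastery.get)
    return lessons[weakest_skill]

def update_mastery(mastery, skill, score, rate=0.3):
    """Blend the latest session score into the running mastery estimate."""
    mastery[skill] = (1 - rate) * mastery[skill] + rate * score
    return mastery

mastery = {"vocabulary": 0.7, "pronunciation": 0.4, "sentence structure": 0.6}
lessons = {"vocabulary": "picture naming set",
           "pronunciation": "minimal pairs practice",
           "sentence structure": "sentence expansion prompts"}

next_lesson = recommend_lesson(mastery, lessons)    # -> "minimal pairs practice"
mastery = update_mastery(mastery, "pronunciation", 0.8)
next_lesson = recommend_lesson(mastery, lessons)    # weakest skill may change
```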

"The AI Institute for Exceptional Education follows 18 already established NSF-led AI Institutes, an ecosystem of AI research and education in pursuit of transformational advances in AI research and development of AI-powered innovation," NSF Program Director James Donlon said. "We are happy to welcome this new team to the AI Institutes program."

Institute comprises top research universities

The institute will consist of more than 30 researchers from nine universities. In addition to Illinois, the consortium includes the University at Buffalo; Stanford University; the University of Washington; Cornell University; the University of Nevada, Reno; the University of Texas at El Paso; Penn State University; and the University of Oregon.

Other investigators at Illinois are Mark Hasegawa-Johnson (Electrical and Computer Engineering), Yun Huang (Information Science), Hedda Meadan-Kaplansky (Special Education), and Windi Krok (Speech and Hearing Science).

"We are eager to see how this team advances AI research to develop better solutions for children with specific speech-language needs, as well as their families and the U.S. schools who serve them. This project is a great example of how we can harness the opportunities that AI technologies can offer to enhance the services that our nation can offer the American people," NSF Program Director Fengfeng Ke said.


Read the original story from the College of Applied Health Sciences.



This story was published January 11, 2023.